10 research outputs found

    Novel Order-Preserving Encryption Scheme for Wireless Sensor Networks

    Get PDF
    An Order-Preserving Encryption (OPE) scheme is a deterministic cipher whose encryption algorithm produces ciphertexts that preserve the numerical ordering of the plaintexts. It is based on strictly increasing functions, and can be seen as a kind of homomorphic encryption whose homomorphic operation is order comparison: comparing encrypted data yields exactly the same result as comparing the original data. OPE is attractive for databases, especially cloud ones, as a way to enhance security, since it allows applications to perform order queries over encrypted data efficiently, without decrypting the data. Wireless sensor networks are another domain in which order-preserving encryption can be adopted with high impact: it can be integrated with secure data aggregation protocols that use comparison operations to aggregate data (MAX, MIN, etc.) so that no decryption is performed on the sensor nodes, which directly reduces power consumption. In this paper, we review several existing order-preserving encryption schemes, briefly explaining each and assessing its efficiency and security. Then, based on the resulting comparative table, we propose a novel order-preserving encryption scheme with good efficiency and low complexity, intended for use in wireless sensor networks with an enhanced level of security.
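
    As a minimal illustration of the order-preserving property described above, the sketch below builds a keyed, strictly increasing mapping from cumulative pseudorandom gaps. It is a toy, not the scheme proposed in the paper; every name and parameter in it is hypothetical, and a real OPE construction must also address security concerns this toy ignores.

```python
import random

def make_ope(key: int, domain_size: int):
    """Toy order-preserving 'encryption': map each plaintext integer to the
    cumulative sum of key-derived positive random gaps. Strictly increasing
    gaps guarantee enc(a) < enc(b) whenever a < b. Illustration only."""
    rng = random.Random(key)            # key seeds the gap sequence
    table, acc = [], 0
    for _ in range(domain_size):
        acc += rng.randint(1, 100)      # positive gap => strict monotonicity
        table.append(acc)
    return lambda pt: table[pt]

enc = make_ope(key=42, domain_size=256)
a, b = 17, 150
# Order comparison works directly on ciphertexts, without decryption --
# this is what lets a sensor node compute MAX/MIN over encrypted readings.
assert (a < b) == (enc(a) < enc(b))
print(enc(a), enc(b))
```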

    An Info-Communicational Perspective on Decision-Making: An Application to Choosing a University in Lebanon

    Get PDF
    Decision-making is omnipresent in human existence; in many cases, a decision commits the rest of a life. This research problematizes the relationship between information and communication, in connection with commitment and the decision that results from it. It shows that a decision is constrained by the information characterizing the situation and by the elements of knowledge communicated. Its experimental field concerns high-school students who must choose a future university, a decision that is especially difficult in Lebanon, an atypical and rather complex country influenced by religion and the geopolitical situation. The main objective of this thesis is to present an exploratory study seeking to characterize the major factors that determine students' decision-making. Starting from information as it is projected into documentary representation, it concretely studies the forms of mediation between a university and a prospective high-school candidate, from a decision-making perspective. The study examines the experiences of students in their first university year, who have therefore definitively completed their decision; it establishes links between commitment theory and decision-making, and seeks to define connections between the information constructed from a document, the decision process, and decision-support engineering. The originality of this research lies in the implementation of a method for listening to needs and prioritizing expectations, developed by the DeVisU laboratory at the Université de Valenciennes, based on the intersection and complementarity of qualitative and quantitative methods. The main contribution of this research is to reveal which characteristics of academic institutions carry students' decisions. Under the effect of national and international competition, most universities cultivate their attractiveness and are forced to develop new recruitment strategies; the challenge is to retain the best students. Thus, for the population studied, it is not the acquisition of knowledge but preparation for the labor market that matters most. Moreover, contrary to much received wisdom, it is human mediation that proves to have the power to establish engaging communication, and in no case that of documentary systems, even technologically advanced ones.

    Using Deep Learning for Object Distance Prediction in Digital Holography

    No full text
    Deep Learning (DL) has marked the beginning of a new era in computer science, particularly in Machine Learning (ML). DL is now applied in many fields, such as speech recognition, automatic navigation systems, and image processing [1]. In this paper, a Convolutional Neural Network (CNN), more precisely a CNN built on top of DenseNet169, is shown to be helpful in predicting object distance in computer-generated holographic images. The problem is addressed as a classification task: 101 classes of images were generated, each class corresponding to a different object distance value at micrometer scale. Experiments show that the proposed network is effective in this context, reaching 100% classification accuracy when trained properly.
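
    A hedged sketch of how such a classifier might be assembled in Keras: a DenseNet169 backbone with a 101-way softmax head. The input size, head layout, and optimizer are assumptions for illustration; the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

NUM_CLASSES = 101  # one class per candidate distance value, as in the paper

# DenseNet169 backbone with a small classification head on top.
base = DenseNet169(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one unit per distance class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```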

    A Real-Time Massive Data Processing Technique for Densely Distributed Sensor Networks

    No full text
    Today, we are awash in a flood of data coming from diverse data-generating sources. Wireless sensor networks (WSNs) are one of the big data contributors, collecting data at unprecedented scale. Unfortunately, much of this data is uninteresting, meaningless, or redundant. Hence, data reduction is becoming a fundamental operation for decreasing communication costs and enhancing data mining in WSNs. In this work, we propose a two-level data reduction approach for sensor networks. The first level, operated by the sensor nodes, consists of compressing collected data using the Pearson correlation coefficient. The second level is executed at intermediate nodes (e.g. aggregators, cluster heads, etc.); its objective is to eliminate redundant data generated by neighboring nodes using two adapted clustering methods: EKmeans and TopK. Through both simulations and experiments on real TelosB sensors, we show the relevance of our approach in minimizing the big data collected in WSNs and enhancing network lifetime, compared to other existing techniques.
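
    A minimal sketch of the first-level idea, assuming a sliding-window formulation: a node transmits a window of readings only when its Pearson correlation with the last transmitted window drops below a threshold. The window layout and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length windows."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    return (xc * yc).sum() / denom if denom else 1.0

def reduce_periods(periods, threshold=0.95):
    """First-level reduction sketch: forward a period's window to the cluster
    head only when it is not strongly correlated with the last forwarded one.
    The threshold is an assumed tuning parameter."""
    sent = [periods[0]]
    for window in periods[1:]:
        if abs(pearson(sent[-1], window)) < threshold:
            sent.append(window)  # new information: transmit it
    return sent

readings = [[20.1, 20.2, 20.3], [20.1, 20.2, 20.4], [25.0, 18.0, 30.0]]
print(len(reduce_periods(readings)), "of", len(readings), "windows transmitted")
```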

    An energy-efficient data prediction and processing approach for the internet of things and sensing based applications

    No full text
    The Internet of Things (IoT) is a vision in which billions of smart objects are linked together. In the IoT, “things” are expected to become active, able to interact and communicate among themselves and with the environment by exchanging data and information sensed about the environment. In this future interconnected world, multiple sensors join the internet dynamically and use it to exchange information all over the world in semantically interoperable ways. Huge amounts of data are therefore generated and transmitted over the network, so these applications require massive storage, huge computation power for real-time processing, and high-speed networking. In this paper, we propose a data prediction and processing approach that reduces the amount of data collected and transmitted over the network while guaranteeing data integrity. The approach is dedicated to devices/sensors with low energy and computing resources. Our technique is composed of two stages: an on-node prediction model and an in-network aggregation algorithm. The first stage uses a Lagrange interpolation polynomial model to reduce the amount of data generated by each sensor node, while the second stage uses a statistical test, the Kolmogorov-Smirnov test, to reduce the redundancy between data generated by neighbouring nodes. Simulations on real sensed data reveal that the proposed approach significantly reduces the amount of data generated and transmitted over the network, thus conserving sensors' energy and extending the network lifetime.
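
    A hedged sketch of the two stages using SciPy: a Lagrange interpolating polynomial stands in for the on-node prediction model, and a two-sample Kolmogorov-Smirnov test stands in for the in-network redundancy check. The window size, significance level, and data are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import lagrange
from scipy.stats import ks_2samp

# Stage 1 (on-node): fit a Lagrange polynomial to a few sampled readings and
# send only the polynomial, letting the sink regenerate the series.
t = np.array([0.0, 1.0, 2.0, 3.0])
readings = np.array([20.1, 20.4, 20.9, 21.7])
poly = lagrange(t, readings)        # degree-3 interpolating polynomial
print(np.allclose(poly(t), readings))  # sink-side regeneration matches

# Stage 2 (in-network): Kolmogorov-Smirnov test between two neighbours' data;
# a high p-value suggests the same distribution, so one set can be dropped.
node_a = np.random.default_rng(0).normal(20.5, 0.3, 50)
node_b = np.random.default_rng(1).normal(20.5, 0.3, 50)
stat, p = ks_2samp(node_a, node_b)
if p > 0.05:                        # assumed significance level
    print("redundant: aggregate instead of forwarding both")
```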

    Major earthquake event prediction using various machine learning algorithms

    No full text
    At least two basic categories of earthquake prediction exist: short-term predictions and forecasts. Short-term predictions are made hours or days in advance, while forecasts are made months to years in advance. The majority of studies concern forecasting, taking into consideration the history of earthquakes in specific countries and areas. In this context, the core idea of this work is to predict whether an event should be classified as a positive or negative major earthquake by applying different machine learning algorithms. Eight algorithms have been applied to a real earthquake dataset, namely: Random Forest, Naive Bayes, Logistic Regression, Multilayer Perceptron, AdaBoost, K-Nearest Neighbors, Support Vector Machine, and Classification and Regression Trees. For each selected model, various hyperparameters have been tried, and the resulting predictions have been fairly compared using various metrics, leading to reliable prediction of major events for three of the models.
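
    A minimal sketch of such a comparison with scikit-learn, using a synthetic stand-in for the earthquake dataset; the hyperparameters here are library defaults, not the tuned values from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # CART

# Synthetic stand-in for the earthquake dataset (binary: major / not major).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
    "CART": DecisionTreeClassifier(random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name:24s} F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```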

    Anomalies and breakpoint detection for a dataset of firefighters' operations during the COVID-19 period in France

    No full text
    Firefighters are exposed to many hazards. The main objective of this study is to apply machine learning techniques to match firefighters' operational capacity to demand. This strategy enables fire departments to organize their resources, leading to a reduction in human, material, and financial requirements. The work focuses on predicting the number of firefighters' interventions during the sensitive period of the COVID-19 global pandemic. Experiments on a dataset from 2016 to 2021, provided by the Fire and Rescue Department SDIS 25 in the Doubs region of France, yielded accurate predictions and revealed the existence of a turning point in August 2020, due to an increase in coronavirus cases in France.
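
    The abstract does not name the breakpoint-detection method; one common choice is the PELT algorithm from the third-party ruptures library, sketched below on synthetic monthly counts with an artificial level shift standing in for the August 2020 turning point. The penalty value and data are illustrative assumptions.

```python
import numpy as np
import ruptures as rpt  # third-party change-point detection library

# Synthetic monthly intervention counts with a level shift; the real
# SDIS 25 dataset is not reproduced here.
rng = np.random.default_rng(42)
signal = np.concatenate([rng.normal(100, 5, 55),   # pre-shift regime
                         rng.normal(130, 5, 17)])  # post-shift regime

# PELT with an RBF cost detects shifts in the distribution of the series.
algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
breakpoints = algo.predict(pen=10)  # penalty is an assumed tuning value
print("change points at indices:", breakpoints[:-1])
```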

    A Distributed Processing Technique for Sensor Data Applied to Underwater Sensor Networks

    Get PDF
    Wireless sensor networks (WSNs) present a low-cost solution for enhancing our lives and enable a large variety of applications. One of the major challenges faced by WSNs is energy saving. A well-known and efficient way to reduce energy consumption is data reduction, which consists of reducing the amount of data sensed and transmitted to the sink; minimizing sensor data communication increases network lifetime. In this paper, we propose an energy-efficient data reduction technique based on a clustering architecture. Our objective is to identify and study data similarities at both the sensor and cluster-head (CH) levels. At the first level, each sensor sends a set of representative points to the CH at each period, instead of sending the raw data. When the data points are received by the CH, it uses the Euclidean distance to eliminate redundant data generated by neighboring sensor nodes before sending them to the sink. To validate our approach, we applied our techniques to real underwater sensor data and compared them with other existing data reduction methods. The results show the effectiveness of our technique in improving energy consumption and network lifetime, without loss of data fidelity.
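
    A minimal sketch of the cluster-head step, assuming each node's representative points arrive as a fixed-length vector: a set is forwarded to the sink only if it lies farther than an assumed Euclidean threshold from every set already kept.

```python
import numpy as np

def dedupe_at_cluster_head(point_sets, delta=0.5):
    """Cluster-head sketch: keep a node's representative points only if they
    are farther than Euclidean distance `delta` from every already-kept set.
    `delta` is an assumed similarity threshold, not a value from the paper."""
    kept = []
    for points in point_sets:
        p = np.asarray(points, float)
        if all(np.linalg.norm(p - np.asarray(k)) > delta for k in kept):
            kept.append(points)  # new information: forward to the sink
    return kept

# Three neighbouring nodes; the second nearly duplicates the first.
sets = [[20.1, 20.3, 20.5], [20.2, 20.3, 20.5], [24.0, 23.5, 25.0]]
print(dedupe_at_cluster_head(sets))  # the second set is suppressed
```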

    Efficient anomaly detection on sampled data streams with contaminated phase I data

    No full text
    Control chart algorithms aim to monitor a process over time, in two phases. Phase I, also called the learning phase, estimates the normal process parameters; then, in Phase II, anomalies are detected. However, the learning phase itself can contain contaminated data such as outliers. If left undetected, they can jeopardize the accuracy of the whole chart by distorting the computed parameters, which leads to faulty classifications and defective data analysis results. This problem becomes more severe when the analysis is done on a sample of the data rather than the whole data. To avoid such a situation, Phase I quality must be guaranteed. The purpose of this paper is to introduce a new approach for applying an EWMA chart that obtains accurate anomaly detection results over sampled data even when contaminations exist in Phase I. The new chart is applied to a real dataset, and its performance is evaluated on both sampled and unsampled data according to several criteria.
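
    A hedged sketch of one way to guard Phase I against contamination: estimate the in-control mean and spread with the median and MAD, which resist outliers, before running the standard EWMA recursion in Phase II. The robust estimators are our assumption for illustration, not necessarily the paper's method.

```python
import numpy as np

def ewma_chart(phase1, phase2, lam=0.2, L=3.0):
    """EWMA monitoring sketch with robust Phase I estimation: median and MAD
    replace mean/std so Phase I outliers do not inflate the control limits."""
    x = np.asarray(phase1, float)
    mu = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - mu))  # MAD scaled to std under normality
    z, alarms = mu, []
    for i, obs in enumerate(phase2):
        z = lam * obs + (1 - lam) * z           # EWMA recursion
        # Time-varying control limit for the EWMA statistic.
        half_width = L * sigma * np.sqrt(lam / (2 - lam)
                                         * (1 - (1 - lam) ** (2 * (i + 1))))
        if abs(z - mu) > half_width:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(7)
phase1 = np.append(rng.normal(0, 1, 95), [8, 9, 10, 11, 12])       # contaminated
phase2 = np.append(rng.normal(0, 1, 30), rng.normal(2.5, 1, 20))   # drift starts
print("alarms at Phase II indices:", ewma_chart(phase1, phase2))
```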